A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors. (arXiv:2207.11621v2 [stat.ML] UPDATED)
In this work we establish an algorithm- and distribution-independent
non-asymptotic trade-off between the model size, excess test loss, and training
loss of linear predictors. Specifically, we show that models that perform well
on the test data (have low excess loss) are either "classical" -- with training
loss close to the noise level -- or "modern" -- with a much larger number of
parameters than the minimum needed to fit the training data exactly.
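The trade-off can be seen in a toy double-descent experiment. The following is a minimal sketch, not taken from the paper: it fits minimum-norm least squares on isotropic Gaussian features and sweeps the model size d across the interpolation threshold d = n; the dimensions, noise level, and seed are all hypothetical choices. "Classical" models (d well below n) have training loss near the noise level and low excess loss; near d = n the excess test loss peaks; "modern" models (d >> n) interpolate the training data yet again achieve low excess loss.

```python
# A minimal simulation (illustrative only, not the paper's construction):
# minimum-norm least squares on Gaussian features, sweeping the model size d
# past the interpolation threshold d = n. All parameter choices are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, d_total, noise = 100, 400, 0.5             # samples, max features, noise std
w_star = np.zeros(d_total)
w_star[:20] = rng.normal(size=20) / np.sqrt(20)   # signal in the first 20 features

X = rng.normal(size=(n, d_total))
y = X @ w_star + noise * rng.normal(size=n)
X_test = rng.normal(size=(5000, d_total))
y_test_clean = X_test @ w_star                # noiseless targets => excess loss

print(f"noise level (sigma^2): {noise**2:.3f}")
for d in [25, 50, 90, 100, 110, 200, 400]:    # sweep across the threshold d = n
    w_hat = np.linalg.pinv(X[:, :d]) @ y      # minimum-norm least-squares fit
    train = np.mean((X[:, :d] @ w_hat - y) ** 2)
    excess = np.mean((X_test[:, :d] @ w_hat - y_test_clean) ** 2)
    print(f"d={d:4d}  train loss={train:.3f}  excess test loss={excess:.3f}")
```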
We also provide a more precise asymptotic analysis for the case where the
limiting spectral distribution of the whitened features is Marchenko-Pastur.
Remarkably, while the Marchenko-Pastur analysis is far more precise near the
interpolation peak, where the number of parameters is just enough to fit the
training data, in settings of most practical interest it differs from the
distribution-independent bound by only a modest multiplicative constant.
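For intuition on the Marchenko-Pastur regime, the sketch below (again illustrative, with arbitrary sizes) compares the eigenvalue spectrum of the sample covariance of whitened Gaussian features against the closed-form Marchenko-Pastur support and density for aspect ratio gamma = d/n.

```python
# An illustrative check (not from the paper, all sizes arbitrary): the
# eigenvalue spectrum of the sample covariance of whitened Gaussian features
# approaches the Marchenko-Pastur law with aspect ratio gamma = d/n.
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 1000
gamma = d / n                                   # aspect ratio, here 0.5

X = rng.normal(size=(n, d))                     # whitened (identity-covariance) features
eigs = np.linalg.eigvalsh(X.T @ X / n)          # sample covariance eigenvalues

lo, hi = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2   # MP support edges
print(f"empirical spectrum:       [{eigs.min():.3f}, {eigs.max():.3f}]")
print(f"Marchenko-Pastur support: [{lo:.3f}, {hi:.3f}]")

# Closed-form MP density for unit-variance entries (can be overlaid on a
# histogram of eigs to visualize the convergence):
x = np.linspace(lo, hi, 200)
mp_density = np.sqrt(np.maximum((hi - x) * (x - lo), 0.0)) / (2 * np.pi * gamma * x)
```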